AI on the Water: Applying DRL to Autonomous Vessel Navigation
Human decision-making errors cause a majority of globally reported marine
accidents. As a result, automation in the marine industry has been gaining more
attention in recent years. Obstacle avoidance becomes very challenging for an
autonomous surface vehicle in an unknown environment. We explore the
feasibility of using the Deep Q-Network (DQN) algorithm, a deep reinforcement
learning approach, for controlling an underactuated autonomous surface vehicle to follow
a known path while avoiding collisions with static and dynamic obstacles. The
ship's motion is described using a three-degree-of-freedom (3-DOF) dynamic
model. The KRISO container ship (KCS) is chosen for this study because it is a
benchmark hull used in several studies, and its hydrodynamic coefficients are
readily available for numerical modelling. This study shows that Deep
Reinforcement Learning (DRL) can achieve path following and collision avoidance
successfully and can be a potential candidate that may be investigated further
to achieve human-level or even better decision-making for autonomous marine
vehicles.
Comment: Proceedings of the Sixth International Conference in Ocean Engineering (ICOE2023)
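The 3-DOF dynamic model mentioned above tracks surge, sway, and yaw while ignoring roll, pitch, and heave. A minimal sketch of one integration step is shown below; it covers only the kinematic part (rotating body-fixed velocities into the earth-fixed frame), while the accelerations would come from the hull's hydrodynamic model (e.g. the KCS coefficients) and are supplied as an input here. The function name, the simple Euler integration, and the interface are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def step_3dof(eta, nu, nu_dot, dt):
    """Advance a 3-DOF ship state by one Euler step.

    eta = [x, y, psi]  : position and heading, earth-fixed frame
    nu  = [u, v, r]    : surge, sway, yaw-rate in the body-fixed frame
    nu_dot             : accelerations from the hull's dynamic model
                         (supplied externally; not modelled in this sketch)
    """
    x, y, psi = eta
    u, v, r = nu
    # Kinematics: rotate body-fixed velocities into the earth-fixed frame
    x_dot = u * np.cos(psi) - v * np.sin(psi)
    y_dot = u * np.sin(psi) + v * np.cos(psi)
    eta_next = np.array([x + x_dot * dt, y + y_dot * dt, psi + r * dt])
    nu_next = np.array(nu, dtype=float) + np.array(nu_dot, dtype=float) * dt
    return eta_next, nu_next

# A ship at heading 0 moving at 1 m/s surge advances along the x-axis
eta, nu = step_3dof([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 0.1)
```

In a DRL setup such as the one described, a step function like this would sit inside the environment loop: the agent's action sets the rudder, the dynamic model produces `nu_dot`, and the updated state feeds the next observation.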
Comparison of path following in ships using modern and traditional controllers
Vessel navigation is difficult in restricted waterways and in the presence of
static and dynamic obstacles. This difficulty can be attributed to the
high-level decisions taken by humans during these maneuvers, which is evident
from the fact that 85% of the reported marine accidents are traced back to
human errors. Artificial intelligence-based methods offer us a way to eliminate
human intervention in vessel navigation. Newer methods like Deep Reinforcement
Learning (DRL) can optimize multiple objectives like path following and
collision avoidance at the same time while being computationally cheaper to
implement in comparison to traditional approaches. Before addressing the
challenge of collision avoidance along with path following, the performance of
DRL-based controllers on the path following task alone must be established.
Therefore, this study trains a DRL agent using the Proximal Policy Optimization
(PPO) algorithm and tests it against a traditional PD controller guided by an
Integral Line of Sight (ILOS) guidance system. The KRISO container ship (KCS)
is chosen to test the different controllers. The ship dynamics are
mathematically simulated using the Maneuvering Modelling Group (MMG) model
developed in Japan. The simulation environment is used to train the deep
reinforcement learning-based controller and is also used to tune the gains of
the traditional PD controller. The effectiveness of the controllers in the
presence of wind is also investigated.Comment: Proceedings of the Sixth International Conference in Ocean
Engineering (ICOE2023
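The baseline controller described above combines ILOS guidance (which turns a cross-track error into a desired heading, with an integral term that rejects constant disturbances such as wind) with a PD law on the heading error. The sketch below shows one common form of these two pieces; the lookahead distance, gains, and rudder limit are illustrative tuning values, not those used in the study.

```python
import math

def ilos_heading(y_e, y_int, path_angle, Delta=2.0, kappa=0.1):
    """Integral Line of Sight (ILOS) desired heading for a straight path segment.

    y_e        : cross-track error to the path
    y_int      : integrated cross-track error (rejects steady drift, e.g. wind)
    path_angle : angle of the path segment in the earth-fixed frame
    Delta      : lookahead distance (tuning value, assumed here)
    kappa      : integral gain (tuning value, assumed here)
    """
    return path_angle - math.atan2(y_e + kappa * y_int, Delta)

def pd_rudder(psi, psi_d, r, Kp=1.0, Kd=0.5, delta_max=math.radians(35)):
    """PD heading controller: rudder command from heading error and yaw rate.

    Gains Kp, Kd and the 35-degree rudder saturation are illustrative assumptions.
    """
    # Wrap the heading error to [-pi, pi] before applying the PD law
    err = math.atan2(math.sin(psi_d - psi), math.cos(psi_d - psi))
    delta = Kp * err - Kd * r
    return max(-delta_max, min(delta_max, delta))
```

Tuning these gains against the same simulation environment used to train the DRL agent, as the abstract describes, keeps the comparison between the two controllers fair.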